PMXBOT Log file Viewer


#mongodb logs for Monday the 23rd of February, 2015

[02:30:01] <MLM> How do I find out where the MongoDB msi installer installed with the "Typical" method? I do not see a `C:\mongodb` directory
[02:34:52] <joannac> MLM: where did you tell it to install?
[02:35:34] <bros> I think I'm running into some "atomicity" problem or something.
[02:35:39] <MLM> joannac: There is no option if you click "Typical". I just tried installing again
[02:35:46] <MLM> (uninstalled and reinstalled)
[02:35:46] <bros> What happens when you update a record, while searching if a record like that one exists?
[02:35:59] <bros> Does it disappear from the collection?
[02:36:00] <joannac> probably "Program Files" then?
[02:36:10] <joannac> I haven't installed on Windows, sorry
[02:36:26] <joannac> bros: huh?
[02:36:31] <MLM> The instructions say "C:\mongodb\": http://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows/
[02:36:47] <bros> joannac, db.barcodes.findOne({ account_id: ObjectId("54ea2cd0be9cd21a552b7a72"), '$or': [ { identifier: '1', barcode: '12' } ] }) returns something
[02:36:51] <bros> but, in my program
[02:36:53] <MLM> But just checked program files and yes it was there "C:\Program Files\MongoDB 2.6 Standard"
[02:36:54] <MLM> Thanks
[02:36:56] <bros> the same selector, returns null
[02:37:45] <joannac> MLM: np. the docs say "you can pick where to install. We assume you installed to c:\mongodb". Not the same as "this is the default" :p
[02:38:06] <joannac> bros: what's your app written in? what connector is it using?
[02:38:10] <MLM> I am curious what the default data directory is as well
[02:38:15] <joannac> bros: are you sure it's exactly the same
[02:38:16] <bros> joannac, node. Mongoose.
[02:38:19] <MLM> (with "Typical" install)
[02:38:24] <bros> joannac, Yes. I copy and pasted it out of the mongoose debug log.
[02:38:31] <bros> i'm ripping my hair out man. please help me.
[02:38:32] <joannac> MLM: last I checked, the default is usually Program Files for windows?
[02:38:43] <joannac> bros: same database / collection?
[02:38:47] <MLM> Is there a CLI command to check?
[02:38:49] <bros> joannac, yes.
[02:39:36] <joannac> bros: gist the output from the shell
[02:40:17] <bros> https://gist.github.com/anonymous/89e65d1eb5138002fa18
[02:40:35] <bros> lines 21-23
[02:40:40] <joannac> bros: that's not the mongo shell
[02:40:50] <bros> The mongo shell finds the objects.
[02:41:16] <joannac> yes, you already said that. I want to see the output
[02:41:34] <bros> https://gist.github.com/anonymous/89c65f3153cad290ff45
[02:42:22] <joannac> in mongoose, remove everything except the objectid from your query
[02:42:51] <bros> That works.
[02:45:03] <joannac> how are you constructing your query?
[02:45:51] <bros> Does the first gist not show the outcome of my constructed query?
[02:46:19] <joannac> http://mongoosejs.com/docs/api.html#query_Query-or
[02:46:46] <bros> http://mongoosejs.com/docs/api.html#model_Model.findOne
[02:48:07] <joannac> get rid of the $or
[02:48:13] <joannac> does that work?
[02:48:35] <joannac> the $or is useless in that query anyway
[02:48:41] <bros> It does work.
[02:48:50] <bros> I am trying to see if either the barcode, or the identifier is taken.
[02:49:00] <joannac> then your query is wrong
[02:49:08] <bros> Do I need an and?
[02:49:14] <joannac> no?
[02:49:22] <bros> Why does it work on the mongo shell then?
[02:49:22] <joannac> you need to use $or properly
[02:49:34] <joannac> sigh
[02:49:52] <joannac> do you notice how your returned document satisfies both parts of your $or ?
[02:49:53] <bros> What am I doing improperly with $or?
[02:50:01] <bros> Yes. In this case, it does.
[02:50:05] <bros> That will not always be the case.
[02:50:17] <bros> barcode and identifier are unique fields. I don't want duplicates.
[02:50:22] <joannac> it will with your query
[02:50:33] <joannac> fix your $or
[02:50:38] <bros> What should it be?
[02:50:44] <joannac> http://docs.mongodb.org/manual/reference/operator/query/or/
[02:50:51] <bros> I have the page open.
[02:50:53] <bros> I still do not understand.
[02:51:18] <joannac> read the whole page. look at the examples
[02:51:26] <bros> I have. 4 times now.
[02:51:38] <bros> Please, I'm begging you. I have no idea what I am doing wrong.
[02:52:43] <joannac> paste the example into a text editor, and paste your query into a text editor
[02:52:46] <joannac> and look carefully
[02:52:59] <bros> The example doesn't show how to have 1 field always matched, then either one or another.
[02:53:39] <joannac> db.foo.find({a:1, b:1})
[02:53:46] <bros> It's an array of objects
[02:53:48] <joannac> that means a=1 AND b=1, right?
[02:53:51] <bros> not a multi-key object
[02:53:55] <bros> is that the problem?
[02:54:19] <joannac> yes, i think
[02:54:40] <bros> it is. thank you.
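The fix joannac walked bros toward can be sketched as plain JavaScript objects (values copied from the pasted query; this is an illustration, not the actual application code). In MongoDB's query language, one object with several fields means AND, so an $or array holding a single two-field object has only one branch and never behaves as "either/or":

```javascript
// Broken: the $or array holds ONE clause, so both fields must match together.
const broken = {
  account_id: "54ea2cd0be9cd21a552b7a72",
  $or: [{ identifier: "1", barcode: "12" }],
};

// Fixed: one object per alternative. The document matches when account_id
// matches AND (identifier == "1" OR barcode == "12").
const fixed = {
  account_id: "54ea2cd0be9cd21a552b7a72",
  $or: [{ identifier: "1" }, { barcode: "12" }],
};

console.log(broken.$or.length); // → 1 (effectively an AND)
console.log(fixed.$or.length);  // → 2 (a real OR over two branches)
```

The mongo shell happened to return a document for the broken form only because that particular document satisfied both fields at once.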
[04:32:16] <mordonez> Hi guys, I have a situation with regex search
[04:34:07] <mordonez> http://pastebin.com/fDhBL2sw
[04:34:13] <mordonez> that's exactly the problem
[04:36:24] <joannac> what version?
[04:40:07] <mordonez> latest
[04:40:40] <mordonez> by the way mongodb is amazing
[04:40:58] <mordonez> I am moving from riak and it is absolutely easier to use and friendlier
[04:40:59] <joannac> latest what? 2.6.7? 2.4.12? 3.0.0-rc8 ?
[04:41:31] <mordonez> 2.6.7
[04:41:34] <mordonez> MongoDB shell version: 2.6.6
[04:41:38] <mordonez> osx
[04:43:59] <joannac> try without $regex
[04:44:11] <joannac> pretty sure $regex inside $or is not supported
[04:45:37] <mordonez> db["user"].find({"$or":[{"personalProperties.email":"/.*eolivera.*/xi"}]})
[04:45:39] <mordonez> same
[04:50:01] <mordonez> Ok I got it
[04:50:08] <mordonez> the / are wrong
[04:52:41] <mordonez> yes, now works as expected
[04:52:46] <mordonez> thanks for your help anyway
[04:54:50] <joannac> cool
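What mordonez ran into can be sketched as follows (the email value below is a hypothetical address for illustration): slashes are JavaScript regex-literal syntax, not part of the pattern, so wrapped in quotes the whole thing is just an ordinary string that only matches its own literal text.

```javascript
// Broken: a plain string, matching only the literal text "/.*eolivera.*/xi".
const brokenFilter = { "personalProperties.email": "/.*eolivera.*/xi" };

// Working forms: a bare regex literal, or the $regex operator with a string
// pattern. (The `x` flag is PCRE "extended" mode and has no JS equivalent.)
const literalForm = { "personalProperties.email": /eolivera/i };
const operatorForm = { "personalProperties.email": { $regex: "eolivera", $options: "i" } };

const email = "EOlivera@example.com"; // hypothetical value
console.log(literalForm["personalProperties.email"].test(email)); // → true
console.log(email === brokenFilter["personalProperties.email"]);  // → false
```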
[10:30:14] <h4rdik> Hi
[10:30:17] <h4rdik> I'm new to mongodb. I have a database dump created from a remote server which I would like to restore into my local pc. I have tried mongorestore with no arguments. A bunch of collections got imported but the process aborted with 'bad index key pattern' error.
[10:30:39] <h4rdik> The dump is a folder full of json and bson files
[10:31:08] <h4rdik> How can I import all the collections directly without much hassle?
[10:43:51] <KettleCooked> How much disk space does a mongodb document take? If it contains something in the lines of 1.000 bytes of text and a UUID? Trying to calculate future disk usage with many many small documents.
[10:58:42] <auzty> we must set the role manually, right? if a user only gets readWrite on a db, they won't be able to create a db / user, right?
[10:59:31] <auzty> i just hesitate, if some user only gets the readWrite role, can that user create a user / new db ._.
[11:27:55] <arussel> using mongo 2.6, in aggregate, is there a way to cast a string to int to be able to use arithmetic function on it later on ?
[12:01:56] <brano543> Hello, can anyone tell me the difference between a document store and graph-oriented storage? I just don't get why people think MongoDB is not suitable for graph traversal. If you can get the data into memory, you just build some kind of map recording all the neighbours of the current node and everything is fine. But when you can't fit the data into memory, you need a database, and you have to query for the
[12:02:20] <brano543> node you are looking for. i think you also need to do that in neo4j, or am i wrong?
[12:07:25] <brano543> i looked at how one guy did this using a traditional database http://www.dupuis.me/node/27 but that's more complicated in my opinion, because there are a lot of joins, so it needs to traverse a lot of data every time to get the correct neighbours.
[12:09:52] <brano543> What i am trying to say: he also preprocesses the data, so in the end he knows for every node that he can go left or right, but he still has to look up the id, because he doesn't keep the whole set of relationships in memory.
[12:11:20] <brano543> Is here anyone to back me up that it is a good way to chose MongoDB for creating edge-expanded graph?
[12:15:08] <StephenLynx> afaik, mongo can query by using geo location.
[12:15:16] <StephenLynx> have you looked into that?
[12:17:30] <pamp> Hi all
[12:18:47] <pamp> how can i disaggregate an array in a collection??
[12:19:00] <brano543> StephenLynx: yes,i did, i was just asking if its correct way to solve it like this {"node_id" : something, neighbours: ["id1":cost1, "id2":"cost2"]} and create index on neighbours.id
[12:19:49] <StephenLynx> Don't know, I never used geolocation nor implemented something like that.
[12:20:06] <StephenLynx> pamp, try $unwind
[12:20:34] <StephenLynx> it is an aggregation stage.
[12:21:18] <pamp> thanks Stephen.. can i copy the result into another collection?
[12:21:22] <brano543> StephenLynx: Forget about geolocation for a while. I asked if you can imagine using MongoDB for representing a graph with edges and traversing it
[12:21:36] <pamp> i need to re-model the collection
[12:21:41] <StephenLynx> never used an edge graph either :v
[12:21:53] <StephenLynx> pamp sure, but you will need a second query for that.
[12:22:30] <pamp> ok, i will try that.. thanks
[12:22:38] <StephenLynx> pamp I don't know if its possible to reinsert into the same collection though.
[12:22:50] <pamp> yes, its possible
[12:22:55] <StephenLynx> in the same query?
[12:24:12] <pamp> sorry, misunderstood your sentence
[12:24:39] <pamp> i dont know too if its possible
[12:24:44] <pamp> i will try
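StephenLynx's $unwind suggestion, sketched with hypothetical collection and field names. In the shell this would look roughly like `db.source.aggregate([{ $unwind: "$items" }, { $out: "flattened" }])`, where $out (available since 2.6) writes the pipeline result into another collection, which covers the "copy the result into another collection" question. A small local re-implementation shows what $unwind does to one document:

```javascript
// Produce one copy of the document per element of the named array field,
// with the array replaced by that single element (what $unwind does).
function unwind(doc, field) {
  return doc[field].map((element) => ({ ...doc, [field]: element }));
}

const doc = { _id: 1, items: ["a", "b", "c"] };
console.log(unwind(doc, "items"));
// → [ { _id: 1, items: 'a' }, { _id: 1, items: 'b' }, { _id: 1, items: 'c' } ]
```

Whether $out may target the collection the pipeline reads from is version-dependent; writing to a second collection, as pamp planned, is the safe route.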
[12:30:17] <brano543> StephenLynx: no, you will need so many queries to traverse along the route until you find the final destination. Another way may be to build the graph in memory, find the path, and get the information about the path from mongodb.
[12:30:39] <StephenLynx> welp
[12:32:03] <brano543> StephenLynx: it will be slow if i query mongodb that much, won't it?
[12:32:39] <StephenLynx> Don't know if it will be slower than a SQL query with a lot of joins.
[12:33:08] <StephenLynx> I don't have enough experience with any of the stuff you are dealing with to be able to guess that.
[12:34:29] <brano543> StephenLynx: Hmm, i don't know how i will solve it. i realized just today that it isn't the best idea to traverse a graph using hundreds of queries on mongodb. I guess i will need to use something else for this, maybe redis or something like that.
[12:35:16] <StephenLynx> yeah, probably there is something made for situations like that.
[12:41:23] <GothAlice> KettleCooked: http://bsonspec.org
[14:22:19] <remonvv> \o
[15:46:36] <GothAlice> Hmm. Any way in an aggregate to get the number of fields in a subdocument? My Google Fu is weak this morning.
[15:47:50] <GothAlice> I have something akin to: {_id: ObjectId(…), source: {ObjectId(…): {…}, …}} — I need to know how many keys in "source".
[15:48:19] <kali> GothAlice: a/f i'm afraid
[15:48:28] <kali> GothAlice: i meant m/r :)
[15:49:31] <GothAlice> I'll have to slap a migration on this thing at some point to pivot: source: [{_id: ObjectId(…), …}, …]
[15:49:47] <GothAlice> Oh, nearly two year old code, you give me nostalgia. And make me want to slap myself.
[15:52:26] <StephenLynx> from what I searched you can't use mongo itself to do that, you would have to do that in the application code.
[15:52:54] <GothAlice> Yeah, for now I'll be doing it application-side. (Of all of the queries needed for my "dashboard", this one is the most painful.)
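The application-side workaround GothAlice settled on, sketched against a mock of the document shape she described (the ids below are stand-ins for ObjectIds): with ObjectIds used as dynamic keys of a subdocument, the 2.6 aggregation pipeline has no operator to count them, so count after fetching.

```javascript
// Mock of {_id: ObjectId(…), source: {ObjectId(…): {…}, …}}
const doc = {
  _id: "54ea0000aaaaaaaaaaaaaaaa",
  source: {
    "54ea1111bbbbbbbbbbbbbbbb": {},
    "54ea2222cccccccccccccccc": {},
    "54ea3333dddddddddddddddd": {},
  },
};

// Count the keys of the subdocument in application code.
const keyCount = Object.keys(doc.source).length;
console.log(keyCount); // → 3
```

Pivoting to the array form she mentions (source: [{_id: …}, …]) would let the server do this instead, e.g. via a maintained counter field.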
[16:15:30] <pamp> Hi all
[16:15:51] <pamp> is it possible to use an SSD as a cache in a mongo server?
[16:16:11] <cheeser> why wouldn't it be?
[16:22:55] <mordonez> HI guys, I am running the following query
[16:22:56] <mordonez> {"$and":[{"days.day":{"$gte":54,"$lte":60}},{"days.year":2015}],"deleted":false,"userId":"54ea8aad80a9b862b2734b88"}
[16:23:08] <mordonez> when I run into the shell the result is ok
[16:23:09] <mordonez> nothing
[16:23:36] <mordonez> but when I use it with reactivemongo for some reason it returns values
[16:29:48] <StephenLynx> what is reactive mongo?
[16:39:04] <cheeser> http://reactivemongo.org/
[16:39:10] <cheeser> alternate scala driver
[16:44:11] <kali> with a focus on async
[16:45:08] <cheeser> you mean 'reactive' ;)
[16:50:12] <ernetas> Hey guys.
[16:50:20] <ernetas> Any expected ETA for 3.0?
[16:51:32] <pamp> db.net_top_test.find({"ObjectId":"UtranCell=W46711012"}).forEach(function(d){ var length = d.MO.length; for(var i = 9; i<13;++i){ var v = d.MO[i].props; if(v){ db.net_top_test.update( {"ObjectId":"UtranCell=W46711012"}, { "$set": {"MO.i.P":v}, "$unset": {"MO.i.props":1}} ) } } })
[16:52:10] <pamp> why this query returns cannot use the part (MO of MO.i.P) to traverse the element
[16:53:05] <cromero> that's unreadable. Use pastie.org. See topic
[16:55:48] <StephenLynx> ^
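A likely reading of pamp's error (an assumption, since the paste is hard to read): "MO.i.P" is a literal string, so the server looks for a field actually named "i" inside MO instead of using the loop index. The index has to be spliced into the path by string concatenation, sketched here with the field names from the paste:

```javascript
// Build $set/$unset documents whose field paths contain the loop index.
function buildUpdate(i, value) {
  const set = {};
  const unset = {};
  set["MO." + i + ".P"] = value;      // e.g. "MO.9.P", not the literal "MO.i.P"
  unset["MO." + i + ".props"] = 1;
  return { $set: set, $unset: unset };
}

console.log(buildUpdate(9, "x"));
// → { '$set': { 'MO.9.P': 'x' }, '$unset': { 'MO.9.props': 1 } }
```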
[17:25:42] <pamp> Hi,
[17:26:03] <pamp> anyone knows how to enable SSD cache for mongo server?
[17:27:20] <remonvv> pamp, I'm not even aware such a thing exists.
[17:29:11] <pamp> hmmm i can't see any docs about it, i've an ssd, and i want to use it for cache
[17:30:24] <remonvv> A cache between what and what?
[17:33:31] <pamp> as a temporary cache for reads from the hdd's
[18:16:20] <ernetas> pamp: flashcache, dm-cache, lots of other projects could help you there.
[18:17:29] <pamp> ernetas, thanks for the answer..
[18:17:44] <pamp> i will explore that
[18:19:16] <hahuang61> in my db.currentOp(), I see a bunch of things that look like: https://gist.github.com/hahuang65/c867ad9b26b44db3a19a
[18:19:18] <hahuang61> should I be concerned?
[19:00:40] <FREEZX> hi
[19:00:53] <StephenLynx> sup
[19:01:34] <FREEZX> i've been thinking about map reduce in mongodb
[19:01:55] <FREEZX> i've been trying out couchdb for the past few days
[19:02:27] <FREEZX> and it provides views for keeping mapreduced data
[19:02:51] <FREEZX> and it gets updated automagically when adding new records
[19:03:57] <FREEZX> i think mongodb should have a feature that would allow for new records to get mapreduced automatically
[19:04:14] <FREEZX> if possible, of course
[19:04:27] <cheeser> patches welcome?
[19:04:36] <cheeser> jiras at the least
[19:06:07] <FREEZX> i wanted to discuss here before i go to jira
[19:07:33] <FREEZX> any thoughts?
[19:07:56] <cheeser> map/reduce isn't really a good streaming option
[19:09:24] <FREEZX> true, but it's necessary in many cases
[19:10:42] <FREEZX> and i think that if couchdb has it, and relies on it for everything, it's probably possible in mongodb as well
[19:13:45] <cheeser> possible. but it's unlikely, i'd think.
[19:18:21] <antitrim> hi! I have a setup with directory per db that got slightly messed up and I'm hoping it's easy to fix
[19:18:49] <antitrim> simply put I have a dir with database 1 foo, and it got moved (while the db was off) to foo_
[19:19:04] <antitrim> then when restarted a new foo was created and foo_ does not show up in my database listing
[19:19:22] <antitrim> is there an easy way to make that foo_ show up with my databases?
[19:19:31] <antitrim> I can move data around, that's fine
[19:23:08] <Nepoxx> is it better to query "not exist OR null", or to not make the field optional, and make it null when unset?
[19:24:21] <dbclk_> hey guys I have an issue with a mongo cluster
[19:24:33] <dbclk_> primary and secondary servers right
[19:24:41] <dbclk_> so all the request are going to primary
[19:24:58] <dbclk_> and all of a sudden the secondary complains that it has lost connection to primary
[19:25:02] <dbclk_> it happens intermittently
[19:25:24] <dbclk_> and i'm sure it isn't a network outage as both are on the same LAN
[19:25:30] <dbclk_> any ideas what could be causing this?
[20:08:10] <fikse> i am looking at https://github.com/kcbanner/connect-mongo
[20:08:27] <fikse> what does `new MongoStore({...})` do exactly?
[20:08:35] <fikse> does that create a table in the db?
[20:08:38] <fikse> does it create a db?
[20:15:34] <StephenLynx> no idea.
[20:16:03] <StephenLynx> notice that you are asking about something that is a library for a framework for a platform
[20:16:20] <StephenLynx> that just happens to be used to interact with mongo.
[20:17:48] <StephenLynx> have you tried just using the driver?
[20:18:17] <fikse> i'm trying to figure out what someone else did
[20:19:49] <StephenLynx> existing project?
[20:19:57] <mordonez> Hi guys
[20:20:01] <StephenLynx> sup
[20:20:07] <mordonez> I am trying to filter in an array with range
[20:20:08] <mordonez> http://pastebin.com/brckm4Px
[20:20:15] <mordonez> but it returns the record
[20:20:19] <mordonez> any ideas?
[20:21:13] <StephenLynx> http://stackoverflow.com/questions/7811163/in-mongodb-how-do-i-find-documents-where-array-size-is-greater-than-1
[20:21:48] <StephenLynx> what he suggests is using either a map-reduce or having a field that tracks the length
[20:21:56] <StephenLynx> you can't actually query for an array size.
[20:22:26] <StephenLynx> I would go with the second method
[20:23:01] <StephenLynx> actually scrap that
[20:23:01] <StephenLynx> http://docs.mongodb.org/manual/reference/operator/query/size/
[20:23:31] <StephenLynx> actually scrap that too because you can only query for specific sizes and not ranges :v
[20:24:06] <StephenLynx> "To select documents based on fields with different numbers of elements, create a counter field that you increment when you add elements to a field." mordonez, that's from the official docs.
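The counter-field approach quoted from the docs can be sketched like this (the `daysCount` field name is hypothetical): since $size cannot express ranges, keep a counter next to the array, increment it on every push, and range-query the counter instead of the array's length.

```javascript
// Update document: push an element AND bump the counter in the same write,
// so the counter always tracks the array's length.
const pushUpdate = {
  $push: { days: { year: 2015, day: 5, time: 0 } },
  $inc: { daysCount: 1 },
};

// Range query on the counter, which $size alone cannot express.
const rangeQuery = { daysCount: { $gte: 2, $lte: 7 } };

console.log(pushUpdate.$inc.daysCount); // → 1
```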
[20:27:05] <mordonez> What I want is to match a row that has a record in days like this { "year" : 2015, "day" : 5, "time" : 0 }
[20:27:15] <mordonez> if it has { "year" : 2015, "day" : 4, "time" : 0 } for example
[20:27:22] <mordonez> I don't wanna take it
[20:32:18] <StephenLynx> oh
[20:32:29] <StephenLynx> you want to query for array elements?
[20:33:30] <mordonez> Let me explain you a little bit more
[20:33:55] <mordonez> the record in pastebin contains, in days, objects that represent a week
[20:34:10] <mordonez> in that case I have the end of 2014 and start of 2015
[20:34:22] <mordonez> the next week record will contain just 2015 records
[20:34:29] <mordonez> the thing is I want to filter by week
[20:34:46] <mordonez> but the query always returns that record that has both years 2014 and 2015
[20:35:33] <StephenLynx> http://stackoverflow.com/questions/8835757/return-query-based-on-date you can query for dates. but then you would have to record the date properly.
[20:35:35] <mordonez> I suppose filtering with and would be enough
[20:35:40] <mordonez> {"$and":[
[20:35:41] <mordonez> {"days.day":{"$gte":5, "$lte":11}},
[20:35:41] <mordonez> {"days.year":2015}
[20:35:41] <mordonez> ],
[20:35:41] <mordonez> "deleted":false,"userId":"54ea8aad80a9b862b2734b88"}
[20:35:53] <mordonez> that's why I put the and so both must match
[20:36:41] <mordonez> I want to filter something like "give me all records that contains in days a record that has a day >= 5 and day <= 11 and year = 2015"
[20:36:58] <mordonez> if you see the record, none of the days record match
[20:37:08] <mordonez> but I don't know how to represent that in query
[20:37:13] <mordonez> I tried with and
[20:37:22] <mordonez> but it's still not working
[20:39:40] <StephenLynx> I think you don't need $and for that.
[20:39:55] <StephenLynx> you can probably query like myarray:{your stuff here separated by commas}
[20:40:25] <StephenLynx> if all else fails you can unwind and group.
[20:41:49] <mordonez> I tried that before, just separate by commas
[20:41:52] <mordonez> but not working
[20:43:36] <mordonez> I tried like this
[20:43:36] <mordonez> {"days.day":{"$gte":5,"$lte":11},"days.year":2015,"deleted":false,"userId":"54ea8aad80a9b862b2734b88"}
[20:43:51] <mordonez> and it returns a record that does not match
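The behaviour mordonez describes is expected with dot notation: each condition on "days.day" and "days.year" may be satisfied by a *different* array element. $elemMatch requires a single element to satisfy all conditions at once; a sketch using the field names from the pasted query, plus a local check of the two semantics against a mock array:

```javascript
// Broken: conditions evaluated independently across the whole array.
const broken = {
  "days.day": { $gte: 5, $lte: 11 },
  "days.year": 2015,
  deleted: false,
  userId: "54ea8aad80a9b862b2734b88",
};

// Fixed: one element must satisfy day range AND year together.
const fixed = {
  days: { $elemMatch: { day: { $gte: 5, $lte: 11 }, year: 2015 } },
  deleted: false,
  userId: "54ea8aad80a9b862b2734b88",
};

// Mock of a week spanning the 2014/2015 boundary:
const days = [{ year: 2014, day: 360, time: 0 }, { year: 2015, day: 4, time: 0 }];

// Dot-notation semantics: any element may satisfy each condition separately.
const dotNotationMatches =
  days.some((d) => d.day >= 5) &&    // day 360 satisfies this
  days.some((d) => d.day <= 11) &&   // day 4 satisfies this
  days.some((d) => d.year === 2015); // the 2015 element satisfies this

// $elemMatch semantics: one element must satisfy everything.
const elemMatchMatches = days.some(
  (d) => d.day >= 5 && d.day <= 11 && d.year === 2015
);

console.log(dotNotationMatches); // → true  (the surprising match)
console.log(elemMatchMatches);   // → false (what was intended)
```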
[21:04:26] <hahuang61> seeing a ton of context switches and interrupts on our cluster right now, but no slow queries. anything besides the yielding that might cause this?
[22:12:45] <jrbaldwin> anyone know why this aggregate $geonear $text query isn't working http://stackoverflow.com/questions/28684188/mongo-aggregate-geonear-and-text-no-results
[22:18:32] <Boomtime> jrbaldwin: $text requires a text index - and only one index can be used for an aggregation
[22:20:05] <jrbaldwin> Boomtime: i have a text index on $text and a 2dsphere index for $geonear - is there a best practice for this type of query ?
[22:21:37] <Boomtime> only one index can be used for an aggregation
[22:24:34] <Boomtime> jrbaldwin: sorry, but i don't think you can combine $text with geo queries, you can try your $match stages the other way around but i don't think it will work
[22:26:52] <jrbaldwin> Boomtime: thanks! have you heard of an alternative way to reduce the results, or can you think of another way to approach it?
[22:27:53] <Boomtime> keywords/tags instead of $text
[22:30:48] <Nepoxx> My TTLs are never expiring, indexes are set correctly though, this is strange...
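Nepoxx gives no details, but a common cause of that exact TTL symptom is worth sketching (this is an assumption, not a diagnosis): the indexed field must hold a BSON Date. Numeric timestamps and strings are silently ignored by the TTL monitor, which also only sweeps roughly once a minute. The index itself would be created in the shell with something like `db.sessions.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })` (collection and field names hypothetical).

```javascript
// Only the first document is eligible for TTL expiry: its field is a real
// Date. Date.now() returns a plain number, which the TTL monitor ignores.
const expiring = { createdAt: new Date() };
const neverExpiring = { createdAt: Date.now() };

console.log(expiring.createdAt instanceof Date);      // → true
console.log(neverExpiring.createdAt instanceof Date); // → false
```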