PMXBOT Log file Viewer


#mongodb logs for Monday the 30th of December, 2013

[00:02:27] <ryansteckler> Interesting: It looks like doing a query for an array that doesn't contain a certain item doesn't qualify for using the positional operator in the update.
[00:04:21] <ryansteckler> e.g.: given this model: { player: { moves: [ { description: "punch" }, { description: "punch", target: "balls" } ] } }
[00:05:19] <ryansteckler> this update fails: findAndModify({ query: { "player.moves.target": { $exists: false } }, update: { $set: { "player.moves.$.target": "face" } } })
[00:06:13] <ryansteckler> Gives the "positional operator needs array in query" error
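A minimal sketch of a query that should let the positional operator bind, using $elemMatch so the array field itself appears in the query; the collection name players is assumed, the fields come from the example above:

    db.players.findAndModify({
        query: { "player.moves": { $elemMatch: { target: { $exists: false } } } },
        update: { $set: { "player.moves.$.target": "face" } }
    })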
[01:26:38] <testing22_> in a sharded aggregation system, is there any way to distribute the group, etc operations that run on the mongos? spark or something?
[01:46:06] <resting> does anyone have any examples of how to use the C driver's mongo_create_simple_index? http://api.mongodb.org/c/current/api/mongo_8h.html#a1121273fd827f32c95ee56e90a4e62c5
[01:46:27] <resting> what available options are there?
[03:35:36] <Happy_Meal> Hi, I was wondering how I would implement cursor limit and skip functionality in PHP. I have a database which will eventually get very large and I don't want to return all the results at once. Are skip and limit the best options to ensure this?
[03:36:19] <Happy_Meal> I want to divide the information up into pages for better access.
[03:37:07] <joannac> skip() won't perform well with huge skips
[03:37:21] <joannac> do something like db.foo.find().limit(20)
[03:37:35] <joannac> get to the end of the cursor, say with document _id: 1234
[03:38:21] <joannac> sorry, db.foo.find().sort({_id:1}).limit(20)
[03:38:45] <joannac> then db.foo.find({_id: {$gt: 1234}}).sort({_id: 1}).limit(20)
[03:38:46] <joannac> etc
[03:38:58] <joannac> use the last result in your last batch to get the next batch
[03:39:08] <joannac> it'll be fast because of the _id index
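A minimal sketch of the range-based paging joannac describes, assuming a collection named foo and paging on the indexed _id:

    // first page, sorted by _id
    var page = db.foo.find().sort({ _id: 1 }).limit(20).toArray();
    // remember the last _id in this batch
    var lastId = page[page.length - 1]._id;
    // next page: everything after that _id, same sort, same limit
    db.foo.find({ _id: { $gt: lastId } }).sort({ _id: 1 }).limit(20);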
[03:39:56] <Happy_Meal> Ok, thanks very much. I was wondering if the skip would use too many resources or not. Forgive my ignorance. I'm just now taking the DBA's course.
[03:47:06] <resting> i've added an index to the key "timestamp" while there is no data. then i begin adding data. when will the index be built? or is it built on every insert?
[03:54:06] <paulkon> how do the comparison operators work when comparing different types?
[03:54:43] <paulkon> e.g. using $lte against _id with a unix timestamp
[03:55:11] <paulkon> or would I need to convert the timestamp to an objectid beforehand
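Query comparison operators generally only match values of the same BSON type, so a raw unix timestamp compared with $lte against ObjectId _ids won't match anything; converting the timestamp to an ObjectId first is the usual route. A sketch, assuming auto-generated ObjectId _ids and a collection called events (the first four bytes of an ObjectId encode the creation time in seconds):

    var ts = 1388361600;                                        // example unix timestamp, in seconds
    var tsId = ObjectId(ts.toString(16) + "0000000000000000");  // pad the 8 hex digits to 24
    db.events.find({ _id: { $lte: tsId } });                    // docs created at or before ts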
[08:00:09] <Logicgate> okay so I'm aggregating a collection by a field and counting the results. I need to count only distinct results though
[08:00:18] <Logicgate> is there a way to aggregate distinct results?
[08:01:26] <Logicgate> here's a sample of the data: {type: 'connect', date: <some_date>, ip: <some_ip>}
[08:02:20] <Logicgate> i need to count objects with the "connect" type with distinct ips
[08:02:24] <Logicgate> aggregated by date
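One way to get that in a single pipeline is two $group stages: the first collapses to one document per (date, ip) pair, the second counts those pairs per date. A sketch, with the collection name events assumed and the date field grouped as-is:

    db.events.aggregate([
        { $match: { type: "connect" } },
        // one document per distinct (date, ip) combination
        { $group: { _id: { date: "$date", ip: "$ip" } } },
        // count the distinct ips for each date
        { $group: { _id: "$_id.date", distinctIps: { $sum: 1 } } }
    ])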
[09:47:48] <jajmon> i want to do an upsert like this to add an element to an array: $set: {"values.10": "foo"} works fine, however, in the same document i also want to keep track of the last element in the array. so that i get something like {lastValue: "foo", values: ["bar", "foo"]}. does that make sense? :)
[09:48:09] <jajmon> can i do all of that in a single upsert?
[10:41:10] <kali> yes, and yes
[10:54:35] <jajmon> how :)
[10:55:08] <kali> $set: {"values.10": "foo", lastValue: "foo" }
[10:57:19] <jajmon> that assumes 10 is the last value, though
[11:01:37] <jajmon> if you later want to set values.5 then lastValue should not change
[11:02:32] <kali> you'll need some application side logic
[11:03:11] <kali> and probably some findAndModify too
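A sketch of what that application-side logic could look like in the shell: read the document to learn the array length, decide whether lastValue changes, then apply both fields in one update (collection name docs and someId are placeholders):

    var idx = 10, val = "foo";
    var doc = db.docs.findOne({ _id: someId });
    var setDoc = {};
    setDoc["values." + idx] = val;
    // only update lastValue when writing at or past the current end of the array
    if (!doc || !doc.values || idx >= doc.values.length) {
        setDoc.lastValue = val;
    }
    db.docs.findAndModify({ query: { _id: someId }, update: { $set: setDoc }, upsert: true, new: true });

Note the read-then-write still leaves a small race window, which is the trade-off kali is pointing at with "application side logic".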
[11:35:58] <SeptDePique> hello all! one question: i want to use mongodb with bson-pojo-mapping. i don't know where to find good information. which library should i choose?
[11:36:13] <SeptDePique> ah sorry in java
[17:24:06] <stephenmac7> Hey, I'm having an issue with morphia, an ODM for Mongo
[17:24:10] <stephenmac7> https://github.com/mongodb/morphia
[17:24:34] <ron> yes, what is the issue?
[17:24:50] <stephenmac7> The error: http://pastebin.com/qKSYZERV
[17:25:01] <stephenmac7> The code: https://github.com/stephenmac7/Incorporate/blob/master/src/main/java/com/stephenmac/incorporate/UserCommandExecutor.java
[17:26:03] <ron> that's not a morphia issue, that's a classpath issue.
[17:26:25] <ron> your classpath doesn't include com.stephenmac.incorporate.Company
[17:27:02] <ron> unfortunately, I gotta run now. you have 3 options if you don't know how to proceed - a. maybe someone else will be able to help here. b. ask in ##java. c. wait until I return ;)
[17:27:05] <ron> good luck
[17:33:59] <stephenmac7> Hey, I'm having issues with my jar file. I'm getting a class not found warning (http://pastebin.com/qKSYZERV)
[17:34:03] <stephenmac7> Sorry, wrong channel
[17:34:13] <stephenmac7> Trying ron's option b
[18:58:47] <ehthegmanack> Hi all, I'm running into some issues with MMS backup
[18:59:03] <ehthegmanack> it seems to be stuck on "last sync" starting
[18:59:16] <ehthegmanack> has anyone experienced this before?
[20:16:40] <poorman> anyone have any experience with normalizing ObjectId's into integers for machine learning?
[20:16:56] <poorman> if I simply do a base64 the resulting number is too large for the network
[20:18:56] <poorman> actually base64 wouldn't help anyway, disregard that
[21:07:43] <saml> machine learning?
[21:08:06] <saml> integers?
[21:35:36] <sheki> hey can I use mongos without any config servers, just as a plain proxy.
[21:35:42] <sheki> what are the roadblocks currently?
[21:36:21] <kali> poorman: you can use anything as _id if you don't like objectid
[21:37:42] <poorman> yeah I was thinking about just making them longs to begin with, but that could get rather nasty when trying to scale
[21:38:51] <poorman> seems like mahout just maps them: https://builds.apache.org/job/Mahout-Quality/ws/trunk/integration/target/site/apidocs/org/apache/mahout/cf/taste/impl/model/mongodb/MongoDBDataModel.html
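If sequential integer _ids are acceptable, one common pattern is a counters collection incremented with findAndModify; the collection and sequence names below are placeholders:

    // hands out the next integer for a given sequence name
    function nextId(name) {
        var ret = db.counters.findAndModify({
            query: { _id: name },
            update: { $inc: { seq: 1 } },
            upsert: true,
            new: true
        });
        return ret.seq;
    }
    db.players.insert({ _id: nextId("players"), name: "example" });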
[22:22:05] <dccc> i'm having some trouble figuring out $group. i have file: { id: 123, type: 'a', content: '...' }. i want to return files: [ { _id: 123, ...}, { _id: 123, ... }, ...]
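If the goal is one document per id with the matching files collected into an array, a $push inside $group should do it; the collection name files is assumed and the fields are pushed explicitly:

    db.files.aggregate([
        { $group: { _id: "$id", files: { $push: { type: "$type", content: "$content" } } } }
    ])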
[23:21:10] <CupOfCocoa> Hey guys, having some performance trouble: There are two collections, one with ~25M docs and one with ~315k. The smaller docs basically represent groups of the larger collection, and I now iterate over the smaller collection and for each group retrieve the docs it represents (each group holds a list of _ids)
[23:23:32] <CupOfCocoa> The query I use is {"_id": {"$in": [idlist]}}. This works well for the first 118k docs from the smaller collection, but afterwards mongodb just kind of caves in and uses 100% CPU, eventually delivering the docs, though each query literally takes minutes. The smaller collection is loaded beforehand and is entirely in memory at this point; the machine has 16 GB RAM and an SSD, so even if mongodb has to load from disk it should be fairly quick
[23:23:57] <CupOfCocoa> Any ideas what the issue might be?
[23:26:41] <CupOfCocoa> No sharding btw, everything is on one machine and I just upgraded to 2.4.8. Top shows mongodb uses 6 GB of memory but there are another 6 GB unused
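Two things that might help narrow this down: check the query plan, and try issuing the huge $in lists in smaller chunks; the collection and variable names here are placeholders:

    // confirm the _id index is used and compare nscanned with n
    db.big.find({ _id: { $in: idList } }).explain();
    // issue the list in smaller batches so each individual query stays cheap
    for (var i = 0; i < idList.length; i += 1000) {
        db.big.find({ _id: { $in: idList.slice(i, i + 1000) } }).forEach(processDoc);
    }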